15 research outputs found

    The Liver Tumor Segmentation Benchmark (LiTS)

    Get PDF
    In this work, we report the set-up and results of the Liver Tumor Segmentation Benchmark (LiTS), which was organized in conjunction with the IEEE International Symposium on Biomedical Imaging (ISBI) 2017 and the International Conferences on Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2017 and 2018. The image dataset is diverse and contains primary and secondary tumors with varied sizes and appearances and various lesion-to-background contrast levels (hyper-/hypo-dense), created in collaboration with seven hospitals and research institutions. Seventy-five submitted liver and liver tumor segmentation algorithms were trained on a set of 131 computed tomography (CT) volumes and were tested on 70 unseen test images acquired from different patients. We found that not a single algorithm performed best for both liver and liver tumors in the three events. The best liver segmentation algorithm achieved a Dice score of 0.963, whereas, for tumor segmentation, the best algorithms achieved Dice scores of 0.674 (ISBI 2017), 0.702 (MICCAI 2017), and 0.739 (MICCAI 2018). Retrospectively, we performed additional analysis on liver tumor detection and revealed that not all top-performing segmentation algorithms worked well for tumor detection. The best liver tumor detection method achieved a lesion-wise recall of 0.458 (ISBI 2017), 0.515 (MICCAI 2017), and 0.554 (MICCAI 2018), indicating the need for further research. LiTS remains an active benchmark and resource for research, e.g., contributing the liver-related segmentation tasks to http://medicaldecathlon.com/. In addition, both data and online evaluation are accessible via https://competitions.codalab.org/competitions/17094.

    Bjoern Menze is supported through DFG funding (SFB 824, subproject B12) and a Helmut-Horten-Professorship for Biomedical Informatics by the Helmut-Horten-Foundation. Florian Kofler is supported by the Deutsche Forschungsgemeinschaft (DFG) through the TUM International Graduate School of Science and Engineering (IGSSE), GSC 81. An Tang was supported by the Fonds de recherche du Québec en Santé and the Fondation de l’association des radiologistes du Québec (FRQS-ARQ 34939 Clinical Research Scholarship – Junior 2 Salary Award). Hongwei Bran Li is supported by a Forschungskredit (Grant No. FK-21-125) from the University of Zurich.

    Peer reviewed. Article signed by 109 authors: Patrick Bilic 1,a,b, Patrick Christ 1,a,b, Hongwei Bran Li 1,2,∗,b, Eugene Vorontsov 3,a,b, Avi Ben-Cohen 5,a, Georgios Kaissis 10,12,15,a, Adi Szeskin 18,a, Colin Jacobs 4,a, Gabriel Efrain Humpire Mamani 4,a, Gabriel Chartrand 26,a, Fabian Lohöfer 12,a, Julian Walter Holch 29,30,69,a, Wieland Sommer 32,a, Felix Hofmann 31,32,a, Alexandre Hostettler 36,a, Naama Lev-Cohain 38,a, Michal Drozdzal 34,a, Michal Marianne Amitai 35,a, Refael Vivanti 37,a, Jacob Sosna 38,a, Ivan Ezhov 1, Anjany Sekuboyina 1,2, Fernando Navarro 1,76,78, Florian Kofler 1,13,57,78, Johannes C. Paetzold 15,16, Suprosanna Shit 1, Xiaobin Hu 1, Jana Lipková 17, Markus Rempfler 1, Marie Piraud 57,1, Jan Kirschke 13, Benedikt Wiestler 13, Zhiheng Zhang 14, Christian Hülsemeyer 1, Marcel Beetz 1, Florian Ettlinger 1, Michela Antonelli 9, Woong Bae 73, Míriam Bellver 43, Lei Bi 61, Hao Chen 39, Grzegorz Chlebus 62,64, Erik B.
Dam 72, Qi Dou 41, Chi-Wing Fu 41, Bogdan Georgescu 60, Xavier Giró-i-Nieto 45, Felix Gruen 28, Xu Han 77, Pheng-Ann Heng 41, Jürgen Hesser 48,49,50, Jan Hendrik Moltz 62, Christian Igel 72, Fabian Isensee 69,70, Paul Jäger 69,70, Fucang Jia 75, Krishna Chaitanya Kaluva 21, Mahendra Khened 21, Ildoo Kim 73, Jae-Hun Kim 53, Sungwoong Kim 73, Simon Kohl 69, Tomasz Konopczynski 49, Avinash Kori 21, Ganapathy Krishnamurthi 21, Fan Li 22, Hongchao Li 11, Junbo Li 8, Xiaomeng Li 40, John Lowengrub 66,67,68, Jun Ma 54, Klaus Maier-Hein 69,70,7, Kevis-Kokitsi Maninis 44, Hans Meine 62,65, Dorit Merhof 74, Akshay Pai 72, Mathias Perslev 72, Jens Petersen 69, Jordi Pont-Tuset 44, Jin Qi 56, Xiaojuan Qi 40, Oliver Rippel 74, Karsten Roth 47, Ignacio Sarasua 51,12, Andrea Schenk 62,63, Zengming Shen 59,60, Jordi Torres 46,43, Christian Wachinger 51,12,1, Chunliang Wang 42, Leon Weninger 74, Jianrong Wu 25, Daguang Xu 71, Xiaoping Yang 55, Simon Chun-Ho Yu 58, Yading Yuan 52, Miao Yue 20, Liping Zhang 58, Jorge Cardoso 9, Spyridon Bakas 19,23,24, Rickmer Braren 6,12,30,a, Volker Heinemann 33,a, Christopher Pal 3,a, An Tang 27,a, Samuel Kadoury 3,a, Luc Soler 36,a, Bram van Ginneken 4,a, Hayit Greenspan 5,a, Leo Joskowicz 18,a, Bjoern Menze 1,2,a // 1 Department of Informatics, Technical University of Munich, Germany; 2 Department of Quantitative Biomedicine, University of Zurich, Switzerland; 3 Ecole Polytechnique de Montréal, Canada; 4 Department of Medical Imaging, Radboud University Medical Center, Nijmegen, The Netherlands; 5 Department of Biomedical Engineering, Tel-Aviv University, Israel; 6 German Cancer Consortium (DKTK), Germany; 7 Pattern Analysis and Learning Group, Department of Radiation Oncology, Heidelberg University Hospital, Heidelberg, Germany; 8 Philips Research China, Philips China Innovation Campus, Shanghai, China; 9 School of Biomedical Engineering & Imaging Sciences, King’s College London, London, UK; 10 Institute for AI in Medicine, Technical University of Munich, Germany; 11 Department of Computer Science, Guangdong University of Foreign Studies, China; 12 Institute for diagnostic and interventional radiology, Klinikum rechts der Isar, Technical University of Munich, Germany; 13 Institute for diagnostic and interventional neuroradiology, Klinikum rechts der Isar,Technical University of Munich, Germany; 14 Department of Hepatobiliary Surgery, the Affiliated Drum Tower Hospital of Nanjing University Medical School, China; 15 Department of Computing, Imperial College London, London, United Kingdom; 16 Institute for Tissue Engineering and Regenerative Medicine, Helmholtz Zentrum München, Neuherberg, Germany; 17 Brigham and Women’s Hospital, Harvard Medical School, USA; 18 School of Computer Science and Engineering, the Hebrew University of Jerusalem, Israel; 19 Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, PA, USA; 20 CGG Services (Singapore) Pte. 
Ltd., Singapore; 21 Medical Imaging and Reconstruction Lab, Department of Engineering Design, Indian Institute of Technology Madras, India; 22 Sensetime, Shanghai, China; 23 Department of Radiology, Perelman School of Medicine, University of Pennsylvania, USA; 24 Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, PA, USA; 25 Tencent Healthcare (Shenzhen) Co., Ltd, China; 26 The University of Montréal Hospital Research Centre (CRCHUM) Montréal, Québec, Canada; 27 Department of Radiology, Radiation Oncology and Nuclear Medicine, University of Montréal, Canada; 28 Institute of Control Engineering, Technische Universität Braunschweig, Germany; 29 Department of Medicine III, University Hospital, LMU Munich, Munich, Germany; 30 Comprehensive Cancer Center Munich, Munich, Germany; 31 Department of General, Visceral and Transplantation Surgery, University Hospital, LMU Munich, Germany; 32 Department of Radiology, University Hospital, LMU Munich, Germany; 33 Department of Hematology/Oncology & Comprehensive Cancer Center Munich, LMU Klinikum Munich, Germany; 34 Polytechnique Montréal, Mila, QC, Canada; 35 Department of Diagnostic Radiology, Sheba Medical Center, Tel Aviv university, Israel; 36 Department of Surgical Data Science, Institut de Recherche contre les Cancers de l’Appareil Digestif (IRCAD), France; 37 Rafael Advanced Defense System, Israel; 38 Department of Radiology, Hadassah University Medical Center, Jerusalem, Israel; 39 Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, China; 40 Department of Electrical and Electronic Engineering, The University of Hong Kong, China; 41 Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China; 42 Department of Biomedical Engineering and Health Systems, KTH Royal Institute of Technology, Sweden; 43 Barcelona Supercomputing Center, Barcelona, Spain; 44 Eidgenössische Technische Hochschule Zurich (ETHZ), Zurich, Switzerland; 45 Signal Theory and Communications Department, Universitat Politecnica de Catalunya, Catalonia, Spain; 46 Universitat Politecnica de Catalunya, Catalonia, Spain; 47 University of Tuebingen, Germany; 48 Mannheim Institute for Intelligent Systems in Medicine, department of Medicine Mannheim, Heidelberg University, Germany; 49 Interdisciplinary Center for Scientific Computing (IWR), Heidelberg University, Germany; 50 Central Institute for Computer Engineering (ZITI), Heidelberg University, Germany; 51 Department of Child and Adolescent Psychiatry, Ludwig-Maximilians-Universität, Munich, Germany; 52 Department of Radiation Oncology, Icahn School of Medicine at Mount Sinai, NY, USA; 53 Department of Radiology, Samsung Medical Center, Sungkyunkwan University School of Medicine, South Korea; 54 Department of Mathematics, Nanjing University of Science and Technology, China; 55 Department of Mathematics, Nanjing University, China; 56 School of Information and Communication Engineering, University of Electronic Science and Technology of China, China; 57 Helmholtz AI, Helmholtz Zentrum München, Neuherberg, Germany; 58 Department of Imaging and Interventional Radiology, Chinese University of Hong Kong, Hong Kong, China; 59 Beckman Institute, University of Illinois at Urbana-Champaign, USA; 60 Siemens Healthineers, USA; 61 School of Computer Science, the University of Sydney, Australia; 62 Fraunhofer MEVIS, Bremen, Germany; 63 Institute for Diagnostic and Interventional Radiology, Hannover Medical 
School, Hannover, Germany; 64 Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, The Netherlands; 65 Medical Image Computing Group, FB3, University of Bremen, Germany; 66 Departments of Mathematics, Biomedical Engineering, University of California, Irvine, USA; 67 Center for Complex Biological Systems, University of California, Irvine, USA; 68 Chao Family Comprehensive Cancer Center, University of California, Irvine, USA; 69 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany; 70 Helmholtz Imaging, Germany; 71 NVIDIA, Santa Clara, CA, USA; 72 Department of Computer Science, University of Copenhagen, Denmark; 73 Kakao Brain, Republic of Korea; 74 Institute of Imaging & Computer Vision, RWTH Aachen University, Germany; 75 Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, China; 76 Department of Radiation Oncology and Radiotherapy, Klinikum rechts der Isar, Technical University of Munich, Germany; 77 Department of Computer Science, UNC Chapel Hill, USA; 78 TranslaTUM - Central Institute for Translational Cancer Research, Technical University of Munich, Germany. Postprint (published version)
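
    The Dice score and lesion-wise recall quoted above are simple to reproduce on binary masks. Below is a minimal sketch in Python, assuming NumPy arrays for the predicted and reference tumor masks and SciPy connected-component labelling; the 50%-overlap matching rule is an illustrative assumption, not necessarily the exact criterion used by the official LiTS evaluation.

        import numpy as np
        from scipy import ndimage

        def dice_score(pred: np.ndarray, ref: np.ndarray) -> float:
            """Dice coefficient between two binary masks (1 = foreground)."""
            pred, ref = pred.astype(bool), ref.astype(bool)
            denom = pred.sum() + ref.sum()
            if denom == 0:
                return 1.0  # both masks empty: treat as perfect agreement
            return 2.0 * np.logical_and(pred, ref).sum() / denom

        def lesionwise_recall(pred: np.ndarray, ref: np.ndarray, min_overlap: float = 0.5) -> float:
            """Fraction of reference lesions (3D connected components) covered by the
            prediction with at least `min_overlap` of each lesion's voxels (assumed rule)."""
            ref_labels, n_lesions = ndimage.label(ref.astype(bool))
            if n_lesions == 0:
                return 1.0
            detected = 0
            for lesion_id in range(1, n_lesions + 1):
                lesion = ref_labels == lesion_id
                overlap = np.logical_and(lesion, pred.astype(bool)).sum() / lesion.sum()
                if overlap >= min_overlap:
                    detected += 1
            return detected / n_lesions

    Applied per CT volume and averaged over the test set, these two quantities correspond to the segmentation and detection figures reported in the abstract, up to the benchmark's exact matching and aggregation rules.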

    A large annotated medical image dataset for the development and evaluation of segmentation algorithms

    Get PDF
    Semantic segmentation of medical images aims to associate a pixel with a label in a medical image without human initialization. The success of semantic segmentation algorithms is contingent on the availability of high-quality imaging data with corresponding labels provided by experts. We sought to create a large collection of annotated medical image datasets of various clinically relevant anatomies, available under an open-source license, to facilitate the development of semantic segmentation algorithms. Such a resource would allow: 1) objective assessment of general-purpose segmentation methods through comprehensive benchmarking and 2) open and free access to medical image data for any researcher interested in the problem domain. Through a multi-institutional effort, we generated a large, curated dataset representative of several highly variable segmentation tasks that was used in a crowd-sourced challenge - the Medical Segmentation Decathlon held during the 2018 Medical Image Computing and Computer Aided Interventions Conference in Granada, Spain. Here, we describe these ten labeled image datasets so that these data may be effectively reused by the research community.
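
    As a brief illustration of how such a collection can be consumed, the sketch below pairs images and labels for one task. It assumes a Decathlon-style layout in which each task folder ships a dataset.json listing training image/label pairs as NIfTI files, and it uses nibabel for loading; the field names and the example folder name are assumptions for illustration, not a specification.

        import json
        from pathlib import Path

        import nibabel as nib  # common NIfTI reader; any equivalent loader works
        import numpy as np

        def summarize_task(task_dir: str, n_cases: int = 3) -> None:
            """Print basic statistics for a Decathlon-style task directory,
            assuming dataset.json holds a 'training' list of {'image', 'label'} paths."""
            task = Path(task_dir)
            meta = json.loads((task / "dataset.json").read_text())
            print(f"Task: {meta.get('name', task.name)}  labels: {meta.get('labels')}")
            for pair in meta["training"][:n_cases]:
                image = nib.load(task / pair["image"]).get_fdata()
                label = nib.load(task / pair["label"]).get_fdata()
                print(f"{pair['image']}: shape={image.shape}, "
                      f"foreground voxels={np.count_nonzero(label)}")

        # summarize_task("Task03_Liver")  # hypothetical local task folder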

    The Liver Tumor Segmentation Benchmark (LiTS)

    Full text link
    In this work, we report the set-up and results of the Liver Tumor Segmentation Benchmark (LiTS), which was organized in conjunction with the IEEE International Symposium on Biomedical Imaging (ISBI) 2017 and the International Conferences on Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2017 and 2018. The image dataset is diverse and contains primary and secondary tumors with varied sizes and appearances and various lesion-to-background contrast levels (hyper-/hypo-dense), created in collaboration with seven hospitals and research institutions. Seventy-five submitted liver and liver tumor segmentation algorithms were trained on a set of 131 computed tomography (CT) volumes and were tested on 70 unseen test images acquired from different patients. We found that not a single algorithm performed best for both liver and liver tumors in the three events. The best liver segmentation algorithm achieved a Dice score of 0.963, whereas, for tumor segmentation, the best algorithms achieved Dice scores of 0.674 (ISBI 2017), 0.702 (MICCAI 2017), and 0.739 (MICCAI 2018). Retrospectively, we performed additional analysis on liver tumor detection and revealed that not all top-performing segmentation algorithms worked well for tumor detection. The best liver tumor detection method achieved a lesion-wise recall of 0.458 (ISBI 2017), 0.515 (MICCAI 2017), and 0.554 (MICCAI 2018), indicating the need for further research. LiTS remains an active benchmark and resource for research, e.g., contributing the liver-related segmentation tasks to http://medicaldecathlon.com/. In addition, both data and online evaluation are accessible via https://competitions.codalab.org/competitions/17094.

    The Medical Segmentation Decathlon

    Full text link
    International challenges have become the de facto standard for comparative assessment of image analysis algorithms. Although segmentation is the most widely investigated medical image processing task, the various challenges have been organized to focus only on specific clinical tasks. We organized the Medical Segmentation Decathlon (MSD)—a biomedical image analysis challenge, in which algorithms compete in a multitude of both tasks and modalities—to investigate the hypothesis that a method capable of performing well on multiple tasks will generalize well to a previously unseen task and potentially outperform a custom-designed solution. MSD results confirmed this hypothesis; moreover, the MSD winner continued to generalize well to a wide range of other clinical problems over the following two years. Three main conclusions can be drawn from this study: (1) state-of-the-art image segmentation algorithms generalize well when retrained on unseen tasks; (2) consistent algorithmic performance across multiple tasks is a strong surrogate of algorithmic generalizability; (3) the training of accurate AI segmentation models is now commoditized to scientists who are not versed in AI model training.

    The Medical Segmentation Decathlon

    Get PDF
    International challenges have become the de facto standard for comparative assessment of image analysis algorithms given a specific task. Segmentation is so far the most widely investigated medical image processing task, but the various segmentation challenges have typically been organized in isolation, such that algorithm development was driven by the need to tackle a single specific clinical problem. We hypothesized that a method capable of performing well on multiple tasks will generalize well to a previously unseen task and potentially outperform a custom-designed solution. To investigate the hypothesis, we organized the Medical Segmentation Decathlon (MSD) - a biomedical image analysis challenge in which algorithms compete in a multitude of both tasks and modalities. The underlying data set was designed to explore the axes of difficulty typically encountered when dealing with medical images, such as small data sets, unbalanced labels, multi-site data, and small objects. The MSD challenge confirmed that algorithms with consistently good performance on a set of tasks preserved their good average performance on a different set of previously unseen tasks. Moreover, by monitoring the MSD winner for two years, we found that this algorithm continued generalizing well to a wide range of other clinical problems, further confirming our hypothesis. Three main conclusions can be drawn from this study: (1) state-of-the-art image segmentation algorithms are mature, accurate, and generalize well when retrained on unseen tasks; (2) consistent algorithmic performance across multiple tasks is a strong surrogate of algorithmic generalizability; (3) the training of accurate AI segmentation models is now commoditized to non-AI experts.
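
    The link between cross-task consistency and generalizability can be made concrete with a rank-then-aggregate summary. The sketch below is a simplified stand-in for the challenge's actual significance-aware ranking: it ranks methods within each task by a hypothetical mean Dice score and reports each method's mean rank across tasks, so a lower mean rank indicates more consistently strong performance.

        from statistics import mean

        # Hypothetical per-task mean Dice scores: {task: {method: score}}
        scores = {
            "taskA": {"m1": 0.92, "m2": 0.90, "m3": 0.85},
            "taskB": {"m1": 0.71, "m2": 0.75, "m3": 0.60},
            "taskC": {"m1": 0.88, "m2": 0.86, "m3": 0.89},
        }

        def mean_rank(per_task_scores: dict) -> dict:
            """Rank methods within each task (1 = best) and average the ranks across tasks."""
            ranks = {}
            for per_method in per_task_scores.values():
                ordered = sorted(per_method, key=per_method.get, reverse=True)
                for position, method in enumerate(ordered, start=1):
                    ranks.setdefault(method, []).append(position)
            return {method: mean(r) for method, r in ranks.items()}

        print(mean_rank(scores))  # lower mean rank = more consistently strong across tasks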

    The Liver Tumor Segmentation Benchmark (LiTS)

    Full text link
    In this work, we report the set-up and results of the Liver Tumor Segmentation Benchmark (LiTS), organized in conjunction with the IEEE International Symposium on Biomedical Imaging (ISBI) 2017 and the International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2017. Twenty-four valid state-of-the-art liver and liver tumor segmentation algorithms were applied to a set of 131 computed tomography (CT) volumes with different tumor contrast levels (hyper-/hypo-intense), tissue abnormalities (e.g., after metastasectomy), and varying lesion sizes and counts. The submitted algorithms were tested on 70 undisclosed volumes. The dataset was created in collaboration with seven hospitals and research institutions and manually reviewed by three independent radiologists. We found that not a single algorithm performed best for both liver and tumors. The best liver segmentation algorithm achieved a Dice score of 0.96 (MICCAI), whereas for tumor segmentation the best algorithms achieved Dice scores of 0.67 (ISBI) and 0.70 (MICCAI). The LiTS image data and manual annotations continue to be publicly available through an online evaluation system as an ongoing benchmarking resource.

    Conditional Deletion of Histone Deacetylase 1 in T Cells Leads to Enhanced Airway Inflammation and Increased Th2 Cytokine Production.

    No full text
    Chromatin modifications, such as reversible histone acetylation, play a key role in the regulation of T cell development and function. However, the role of individual histone deacetylases (HDACs) in T cells is less well understood. In this article, we show by conditional gene targeting that T cell-specific loss of HDAC1 led to an increased inflammatory response in an in vivo allergic airway inflammation model. Mice with HDAC1-deficient T cells displayed an increase in all critical parameters in this Th2-type asthma model, such as eosinophil recruitment into the lung, mucus hypersecretion, parenchymal lung inflammation, and enhanced airway resistance. This correlated with enhanced Th2 cytokine production in HDAC1-deficient T cells isolated from diseased mice. In vitro-polarized HDAC1-deficient Th2 cells showed a similar enhancement of IL-4 expression, which was evident already at day 3 of Th2 differentiation cultures and restricted to T cell subsets that underwent several rounds of cell divisions. HDAC1 was recruited to the Il4 gene locus in ex vivo isolated nonstimulated CD4(+) T cells, indicating a direct control of the Il4 gene locus. Our data provide genetic evidence that HDAC1 is an essential HDAC that controls the magnitude of an inflammatory response by modulating cytokine expression in effector T cells

    The Liver Tumor Segmentation Benchmark (LiTS)

    No full text
    In this work, we report the set-up and results of the Liver Tumor Segmentation Benchmark (LiTS), which was organized in conjunction with the IEEE International Symposium on Biomedical Imaging (ISBI) 2017 and the International Conferences on Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2017 and 2018. The image dataset is diverse and contains primary and secondary tumors with varied sizes and appearances and various lesion-to-background contrast levels (hyper-/hypo-dense), created in collaboration with seven hospitals and research institutions. Seventy-five submitted liver and liver tumor segmentation algorithms were trained on a set of 131 computed tomography (CT) volumes and were tested on 70 unseen test images acquired from different patients. We found that not a single algorithm performed best for both liver and liver tumors in the three events. The best liver segmentation algorithm achieved a Dice score of 0.963, whereas, for tumor segmentation, the best algorithms achieved Dice scores of 0.674 (ISBI 2017), 0.702 (MICCAI 2017), and 0.739 (MICCAI 2018). Retrospectively, we performed additional analysis on liver tumor detection and revealed that not all top-performing segmentation algorithms worked well for tumor detection. The best liver tumor detection method achieved a lesion-wise recall of 0.458 (ISBI 2017), 0.515 (MICCAI 2017), and 0.554 (MICCAI 2018), indicating the need for further research. LiTS remains an active benchmark and resource for research, e.g., contributing the liver-related segmentation tasks to http://medicaldecathlon.com/. In addition, both data and online evaluation are accessible via https://competitions.codalab.org/competitions/17094.
    ISSN: 1361-8415; ISSN: 1361-842